Exponential Forgetting and Geometric Ergodicity in Hidden Markov Models

Authors

  • François Le Gland
  • Laurent Mevel
Abstract

We consider a hidden Markov model with multidimensional observations, and with misspecification, i.e. the assumed coefficients (transition probability matrix, and observation conditional densities) are possibly different from the true coefficients. Under mild assumptions on the coefficients of both the true and the assumed models, we prove that: (i) the prediction filter forgets its initial condition almost surely, exponentially fast, and (ii) the extended Markov chain, whose components are the unobserved Markov chain, the observation sequence, and the prediction filter, is geometrically ergodic and has a unique invariant probability distribution.
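The forgetting property in (i) can be illustrated numerically: run the prediction filter on the same observation sequence from two different initial distributions and watch the total variation distance between the two filters shrink. The sketch below assumes a hypothetical 2-state chain with unit-variance Gaussian observation densities; the matrix, means, and step count are illustrative choices, not taken from the paper.

```python
# Sketch of exponential forgetting for the HMM prediction filter,
# assuming a hypothetical 2-state chain with Gaussian observations.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition probability matrix (ergodic)
means = np.array([0.0, 2.0])      # per-state observation means, unit variance

def likelihood(y):
    # Observation conditional densities b_i(y) up to a common constant
    return np.exp(-0.5 * (y - means) ** 2)

def predict_step(p, y):
    # One filter step: Bayes correction by the likelihood, then propagation
    q = p * likelihood(y)
    q /= q.sum()
    return q @ A

# Simulate a state/observation path from the model
x, ys = 0, []
for _ in range(50):
    x = rng.choice(2, p=A[x])
    ys.append(rng.normal(means[x], 1.0))

# Run the prediction filter from two very different initial conditions
p1 = np.array([0.99, 0.01])
p2 = np.array([0.01, 0.99])
tv = []
for y in ys:
    p1, p2 = predict_step(p1, y), predict_step(p2, y)
    tv.append(0.5 * np.abs(p1 - p2).sum())

print(tv[0], tv[-1])  # the gap between the two filters shrinks over time
```

With an ergodic transition matrix, each propagation step contracts the total variation distance by at least the Dobrushin coefficient of `A` (here 0.7), which is the mechanism behind the geometric rate in the paper.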


Similar resources

Exponential forgetting and geometric ergodicity for optimal filtering in general state-space models

State-space models are a very general class of time series models capable of modeling dependent observations in a natural and interpretable way. We consider here the case where the latent process is modeled by a Markov chain taking its values in a continuous space and the observation at each point admits a distribution dependent on both the current state of the Markov chain and the past observation. I...


Several Types of Ergodicity for M/G/1-Type Markov Chains and Markov Processes

In this paper we study polynomial and geometric (exponential) ergodicity for M/G/1-type Markov chains and Markov processes. First, practical criteria for M/G/1-type Markov chains are obtained by analyzing the generating function of the first return probability to level 0. Then the corresponding criteria for M/G/1-type Markov processes are given, using their h-approximation chains. Our method yie...


Exponential Forgetting and Geometric Ergodicity in Hidden Markov Models

We consider a hidden Markov model with multidimensional observations, and with misspecification, i.e. the assumed coefficients (transition probability matrix, and observation conditional densities) are possibly different from the true coefficients. Under mild assumptions on the coefficients of both the true and the assumed models, we prove that: (i) the prediction filter forgets almost surely its init...


Consistency of Bayesian nonparametric Hidden Markov Models

We are interested in Bayesian nonparametric Hidden Markov Models. More precisely, we prove the consistency of these models under appropriate conditions on the prior distribution, when the number of states of the Markov chain is finite and known. Our approach is based on exponential forgetting and standard Bayesian consistency techniques.


Forgetting of the initial distribution for Hidden Markov Models (Mar 2007)

The forgetting of the initial distribution for discrete Hidden Markov Models (HMM) is addressed: a new set of conditions is proposed to establish the forgetting property of the filter, at polynomial and geometric rates. Both a pathwise-type convergence of the total variation distance of the filter started from two different initial distributions, and a convergence in expectation are consider...



Journal:
  • MCSS

Volume 13, Issue 

Pages  -

Publication year: 2000